Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron (MLP) to learn high-frequency functions in low-dimensional problem domains. These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results by using MLPs to represent complex 3D objects and scenes. Using tools from the neural tangent kernel (NTK) literature, we show that a standard MLP has impractically slow convergence to high frequency signal components. To overcome this spectral bias, we use a Fourier feature mapping to transform the effective NTK into a stationary kernel with a tunable bandwidth. We suggest an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities.
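The Gaussian random Fourier feature mapping described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation; the function name, feature count, and `scale` value are illustrative choices, with `scale` playing the role of the tunable kernel bandwidth:

```python
import numpy as np

def fourier_feature_mapping(v, B):
    """Map low-dimensional inputs v of shape (n, d) to the features
    [cos(2*pi*B v), sin(2*pi*B v)] via a random projection B of shape (m, d)."""
    proj = 2.0 * np.pi * v @ B.T                                   # (n, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)   # (n, 2m)

rng = np.random.default_rng(0)
scale = 10.0                                # std-dev of the Gaussian used to sample B;
                                            # larger scale -> higher-frequency features
B = scale * rng.standard_normal((256, 2))   # m = 256 features, d = 2 input dims
coords = rng.random((4, 2))                 # e.g. 2-D pixel coordinates in [0, 1)
features = fourier_feature_mapping(coords, B)
print(features.shape)                       # (4, 512)
```

An MLP would then be trained on `features` instead of the raw coordinates, which is what allows it to fit high-frequency signal content.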
Review for NeurIPS paper: Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Summary and Contributions: Using tools from the neural tangent kernel (NTK) literature, the authors show that a standard multilayer perceptron fails to learn high frequencies, both in theory and in practice. To overcome this spectral bias, they use a Fourier feature mapping to transform the effective NTK into a stationary kernel with a tunable bandwidth. The paper relies on applying the Fourier features work of Rahimi and Recht to approximate the NTK kernel. The main contributions of this paper are twofold: 1) applying an existing seminal method to a new problem, which leads to surprising and interesting findings of relevance to practitioners in deep learning; and 2) a detailed empirical study of the NTK (and its approximation) across several different image-related applications.
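The Rahimi and Recht construction the review refers to approximates a shift-invariant kernel by an inner product of randomized cosine features. A minimal sketch for the Gaussian kernel follows; the variable names, feature count, and bandwidth are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 2, 2000          # input dimension, number of random features
sigma = 1.0             # Gaussian kernel bandwidth

# Random Fourier features: k(x, y) = exp(-||x - y||^2 / (2 sigma^2)) ~ z(x) . z(y)
W = rng.standard_normal((m, d)) / sigma     # frequencies sampled from N(0, I / sigma^2)
b = rng.uniform(0.0, 2.0 * np.pi, m)        # random phase offsets

def z(x):
    return np.sqrt(2.0 / m) * np.cos(W @ x + b)

x, y = rng.random(d), rng.random(d)
exact = np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma**2))
approx = z(x) @ z(y)
print(exact, approx)    # the two values agree up to O(1/sqrt(m)) error
```

The approximation error shrinks as the number of features `m` grows, which is what makes the construction a practical stand-in for an exact kernel.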
Review for NeurIPS paper: Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Using NTK theory, the authors show that a standard multilayer perceptron fails to learn high frequencies, both in theory and in practice. The authors then use a Fourier feature mapping to overcome this spectral bias. The experimental results also demonstrate improved performance given the same number of training points. All reviewers thought the paper contains a very rich set of experiments and interesting numerical results. The reviewers raised various technical concerns in their reviews, but thought that the authors' response adequately addressed these concerns, and multiple reviewers raised their scores.